497 research outputs found

    Guiding of Rydberg atoms in a high-gradient magnetic guide

    We study the guiding of 87Rb 59D5/2 Rydberg atoms in a linear, high-gradient, two-wire magnetic guide. Time-delayed microwave ionization and ion detection are used to probe the Rydberg-atom motion. We observe guiding of Rydberg atoms over a period of 5 ms following excitation. The decay time of the guided-atom signal is about five times that of the initial state. We attribute the lifetime increase to an initial phase of l-changing collisions and thermally induced Rydberg-Rydberg transitions. Detailed simulations of Rydberg-atom guiding reproduce most experimental observations and offer insight into the internal-state evolution.
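    The restoring mechanism behind guiding can be illustrated with a minimal 1-D sketch: a low-field-seeking atom in a linear field gradient feels a constant force toward the guide axis and oscillates about it. All parameters below (moment, gradient, velocity) are illustrative assumptions, not the experiment's values.

```python
import numpy as np

# Minimal 1-D sketch of transverse guiding in a linear magnetic gradient.
# A low-field-seeking atom with magnetic moment mu in a field |B| ~ Bp*|x|
# feels a restoring force F = -mu*Bp*sign(x), so it oscillates about the axis.
mu = 9.274e-24        # one Bohr magneton, J/T (assumed effective moment)
Bp = 3.0              # field gradient, T/m (assumed)
m = 1.44e-25          # mass of 87Rb, kg
x, v = 0.0, 0.1       # start on axis with 0.1 m/s transverse velocity
dt, n = 1e-7, 50000   # 5 ms total, matching the guiding time scale above
xs = []
for _ in range(n):
    v += -mu * Bp * np.sign(x) / m * dt   # semi-implicit Euler: velocity first
    x += v * dt
    xs.append(x)
print(max(np.abs(xs)))  # the atom stays within tens of micrometres: guided
```

Energy conservation gives the turning point directly, x_max = m v^2 / (2 mu Bp), about 26 micrometres for these assumed numbers.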

    On Geometric Variational Models for Inpainting Surface Holes

    Geometric approaches for filling in surface holes are introduced and studied in this paper. The basic idea is to represent the surface of interest in implicit form, and fill in the holes with scalar geometric partial differential equations, or systems of them, often derived from optimization principles. These equations include a system for the joint interpolation of scalar and vector fields, a Laplacian-based minimization, a mean curvature diffusion flow, and an absolutely minimizing Lipschitz extension. The theoretical and computational framework, as well as examples with synthetic and real data, are presented in this paper.
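    The Laplacian-based minimization is the simplest of these flows to demonstrate: solving the Laplace equation inside the hole with the known values as boundary data. The sketch below (function names and the Jacobi scheme are illustrative choices, not the paper's implementation) fills a square hole in a 2-D scalar field.

```python
import numpy as np

def fill_hole_laplace(img, mask, n_iter=2000):
    """Fill masked pixels by harmonic interpolation (Laplace equation):
    repeatedly replace each hole pixel with the mean of its 4 neighbours,
    keeping known pixels fixed (Jacobi iteration)."""
    out = img.astype(float).copy()
    out[mask] = out[~mask].mean()          # crude initial guess inside hole
    for _ in range(n_iter):
        avg = 0.25 * (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
                      np.roll(out, 1, 1) + np.roll(out, -1, 1))
        out[mask] = avg[mask]              # update only the hole pixels
    return out

# Toy example: a linear ramp with a square hole is recovered almost exactly,
# because a linear function is harmonic.
ramp = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))
mask = np.zeros_like(ramp, dtype=bool)
mask[12:20, 12:20] = True
filled = fill_hole_laplace(ramp, mask)
print(float(np.abs(filled - ramp)[mask].max()))
```

The other flows mentioned (mean curvature, AMLE) replace the update rule with their own discretized PDE but keep the same fixed-boundary iteration structure.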

    Robust Large Margin Deep Neural Networks

    The generalization error of deep neural networks via their classification margin is studied in this paper. Our approach is based on the Jacobian matrix of a deep neural network and can be applied to networks with arbitrary nonlinearities and pooling layers, and to networks with different architectures such as feedforward networks and residual networks. Our analysis leads to the conclusion that a bounded spectral norm of the network's Jacobian matrix in the neighbourhood of the training samples is crucial for a deep neural network of arbitrary depth and width to generalize well. This is a significant improvement over the current bounds in the literature, which imply that the generalization error grows with either the width or the depth of the network. Moreover, it shows that the recently proposed batch normalization and weight normalization reparametrizations enjoy good generalization properties, and leads to a novel network regularizer based on the network's Jacobian matrix. The analysis is supported with experimental results on the MNIST, CIFAR-10, LaRED, and ImageNet datasets.
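    The central quantity, the spectral norm of the network's Jacobian near a training point, is easy to compute for a toy model. The sketch below uses a tiny two-layer ReLU network (an illustrative setup, not the paper's architecture): the Jacobian follows from the chain rule, and its largest singular value bounds how much a small input perturbation can be amplified.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer ReLU network f(x) = W2 @ relu(W1 @ x)
W1 = rng.normal(size=(16, 8)) / np.sqrt(8)
W2 = rng.normal(size=(4, 16)) / np.sqrt(16)

def f(x):
    return W2 @ np.maximum(W1 @ x, 0.0)

def jacobian(x):
    d = (W1 @ x > 0).astype(float)     # ReLU derivative (0/1 gate)
    return W2 @ (d[:, None] * W1)      # chain rule: W2 diag(d) W1

x = rng.normal(size=8)
J = jacobian(x)
spec = np.linalg.svd(J, compute_uv=False)[0]   # spectral norm = largest s.v.

# Sanity check: for a small perturbation v that keeps the activation
# pattern fixed, ||f(x+v) - f(x)|| <= spec * ||v||.
v = 1e-6 * rng.normal(size=8)
amp = np.linalg.norm(f(x + v) - f(x)) / np.linalg.norm(v)
print(spec, amp)
```

Penalizing `spec` over training points is the flavour of Jacobian regularizer the abstract refers to.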

    Lessons from the Rademacher complexity for deep learning

    Understanding the generalization properties of deep learning models is critical for successful applications, especially in regimes where the number of training samples is limited. We study the generalization properties of deep neural networks via the empirical Rademacher complexity and show that it is easier to control the complexity of convolutional networks compared to general fully connected networks. In particular, we justify the usage of small convolutional kernels in deep networks as they lead to a better generalization error. Moreover, we propose a representation-based regularization method that allows us to decrease the generalization error by controlling the coherence of the representation. Experiments on the MNIST dataset support these findings.
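    The empirical Rademacher complexity measures how well a function class can correlate with random signs on the sample. For the norm-bounded linear class it has a closed-form supremum, which makes a Monte-Carlo estimate easy to write down; the class and function names below are illustrative, not the paper's networks.

```python
import numpy as np

rng = np.random.default_rng(1)

def rademacher_linear(X, n_draws=2000):
    """Monte-Carlo estimate of the empirical Rademacher complexity of the
    class {x -> w.x : ||w||_2 <= 1} on the sample X (n x d).  For this
    class the supremum over w has the closed form ||(1/n) sum_i s_i x_i||_2."""
    n = X.shape[0]
    vals = []
    for _ in range(n_draws):
        s = rng.choice([-1.0, 1.0], size=n)   # Rademacher sign vector
        vals.append(np.linalg.norm(X.T @ s) / n)
    return float(np.mean(vals))

# The classical bound for this class is max_i ||x_i|| / sqrt(n): complexity
# shrinks as the sample grows, the kind of control the paper argues is
# easier to obtain for convolutional than fully connected networks.
X = rng.normal(size=(200, 10))
est = rademacher_linear(X)
bound = np.linalg.norm(X, axis=1).max() / np.sqrt(200)
print(est, bound)
```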

    The Notion of Field from a Transnational Perspective: The Theory of Social Differentiation through the Prism of Global History

    Reference for the original article: Sapiro, Gisèle. "Le champ est-il national ? La théorie de la différenciation sociale au prisme de l'histoire globale." Actes de la recherche en sciences sociales, no. 200, pp. 70-85, 2013/5. DOI 10.3917/arss.200.007

    Coded aperture compressive temporal imaging

    We use mechanical translation of a coded aperture for code-division multiple-access compression of video. We discuss the compressed video's temporal resolution and present experimental results for reconstructions of more than 10 frames of temporal data per coded snapshot.
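    The forward model can be sketched in a few lines: each of the T video frames is modulated by a translated copy of the binary aperture code, and the modulated frames sum into a single snapshot. The dimensions and variable names below are illustrative, not the paper's notation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Coded-aperture compressive temporal imaging, forward model (sketch):
# snapshot y = sum_t C_t * x_t, with C_t a horizontally translated
# binary code, mimicking mechanical translation of the aperture.
H, W, T = 32, 32, 8
video = rng.random((T, H, W))                      # T frames to compress
base_code = (rng.random((H, W + T)) > 0.5).astype(float)
codes = np.stack([base_code[:, t:t + W] for t in range(T)])  # shifted codes
snapshot = (codes * video).sum(axis=0)             # one coded measurement
print(snapshot.shape)                              # (32, 32): H x W for T frames
```

Recovering the T frames from the single snapshot is then an underdetermined inverse problem, solved with the known codes plus a sparsity or smoothness prior.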

    Computer vision tools for the non-invasive assessment of autism-related behavioral markers

    The early detection of developmental disorders is key to child outcomes, allowing interventions to be initiated that promote development and improve prognosis. Research on autism spectrum disorder (ASD) suggests behavioral markers can be observed late in the first year of life. Many of these studies involved extensive frame-by-frame video observation and analysis of a child's natural behavior. Although non-intrusive, these methods are extremely time-intensive and require a high level of observer training; thus, they are impractical for clinical and large-population research purposes. Diagnostic measures for ASD are available for infants but are only accurate when used by specialists experienced in early diagnosis. This work is a first milestone in a long-term multidisciplinary project that aims at helping clinicians and general practitioners accomplish this early detection/measurement task automatically. We focus on providing computer vision tools to measure and identify ASD behavioral markers based on components of the Autism Observation Scale for Infants (AOSI). In particular, we develop algorithms to measure three critical AOSI activities that assess visual attention. We augment these AOSI activities with an additional test that analyzes asymmetrical patterns in unsupported gait. The first set of algorithms involves assessing head motion by tracking facial features, while the gait analysis relies on joint foreground segmentation and 2D body pose estimation in video. We show results that provide insightful knowledge to augment the clinician's behavioral observations obtained from real in-clinic assessments.

    Learning to Identify While Failing to Discriminate

    Privacy and fairness are critical in computer vision applications, in particular when dealing with human identification. Achieving a universally secure, private, and fair system is practically impossible, as the exploitation of additional data can reveal private information in the original data. Faced with this challenge, we propose a new line of research, where privacy is learned and used in a closed environment. The goal is to ensure that a given entity, trusted to infer certain information from our data, is blocked from inferring protected information from it. We design a system that learns to succeed at the positive task while simultaneously failing at the negative one, and illustrate this with challenging cases where the positive task (face verification) is harder than the negative one (gender classification). The framework opens the door to privacy and fairness in important closed scenarios, ranging from private data-accumulation companies to law enforcement and hospitals.
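    The "succeed while failing" objective can be sketched as a joint loss: cross-entropy on the positive task minus an entropy reward that pushes the negative task's predictions toward chance level. The function name, the entropy term, and the weight `lam` are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def privacy_loss(logits_pos, y_pos, probs_neg, lam=1.0):
    """Joint objective (sketch): minimize cross-entropy on the positive
    task while pushing the negative task toward maximum-entropy
    (uninformative) predictions."""
    # positive task: standard softmax cross-entropy
    p = np.exp(logits_pos - logits_pos.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    ce = -np.log(p[np.arange(len(y_pos)), y_pos] + 1e-12).mean()
    # negative task: reward high entropy, i.e. failure to discriminate
    ent = -(probs_neg * np.log(probs_neg + 1e-12)).sum(axis=1).mean()
    return ce - lam * ent

# Toy check: with the same positive-task predictions, chance-level
# (uniform) negative-task outputs give a lower loss than confident ones.
logits = np.array([[2.0, -1.0], [-0.5, 1.5]])
y = np.array([0, 1])
uniform = np.full((2, 2), 0.5)
confident = np.array([[0.99, 0.01], [0.01, 0.99]])
print(privacy_loss(logits, y, uniform), privacy_loss(logits, y, confident))
```

In a trained system the negative-task predictions would come from the trusted entity's classifier run on the learned representation, with gradients driving that representation toward the uninformative regime.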